Eyeballs and AI power the research into how falsehoods travel online
What sorts of lies and falsehoods are circulating on the internet? Taylor Agajanian used her summer job to help answer this question, one post at a time. The answer often got squishy.
She reviewed a social media post where someone had shared a news story about vaccines with the comment "Hmmm, that's interesting." Was the person actually saying that the news story was interesting, or insinuating that the story isn't true?
Agajanian read around and between the lines often while working at the University of Washington's Center for an Informed Public, where she reviewed social media posts and recorded misleading claims about COVID-19 vaccines.
As the midterm election approaches, researchers and private sector firms are racing to track false claims about everything from ballot harvesting to voting machine conspiracies. But the field is still in its infancy, even as the threats that viral lies pose to the democratic process loom. Getting a sense of which falsehoods people are talking about online might sound like a straightforward exercise, but it isn't.
"The broader question is, can anyone ever know what everybody is saying?" says Welton Chang, CEO of Pyrra, a startup that tracks smaller social media platforms. (NPR has used Pyrra's data in several stories.)
Automating some of the steps the University of Washington team uses humans for, Pyrra uses artificial intelligence to extract names, places and topics from social media posts. Using the same technologies that in recent years have enabled AI to write remarkably like humans, the platform generates summaries of trending topics. An analyst reviews the summaries, weeds out irrelevant items like advertising campaigns, gives them a light edit and shares them with clients.
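Pyrra's internal tooling is not public, but the extract-then-summarize pipeline described above is a common pattern in natural language processing. Here is a minimal illustrative sketch, assuming spaCy for entity extraction and an off-the-shelf Hugging Face summarization model; the libraries, model name and function names are assumptions, not Pyrra's actual stack.

```python
# Hypothetical sketch of an extract-then-summarize pipeline like the one described.
# Library and model choices are assumptions, not Pyrra's actual implementation.
import spacy                       # entity extraction: names, places, organizations
from transformers import pipeline  # generic abstractive summarization model

nlp = spacy.load("en_core_web_sm")
summarizer = pipeline("summarization", model="sshleifer/distilbart-cnn-12-6")

def extract_entities(posts):
    """Pull people, places and organizations out of a batch of posts."""
    entities = {"PERSON": set(), "GPE": set(), "ORG": set()}
    for doc in nlp.pipe(posts):
        for ent in doc.ents:
            if ent.label_ in entities:
                entities[ent.label_].add(ent.text)
    return entities

def summarize_topic(posts):
    """Draft a short summary of a trending cluster for an analyst to review and edit."""
    text = " ".join(posts)[:2000]  # crude truncation to stay within the model's input limit
    result = summarizer(text, max_length=60, min_length=15, do_sample=False)
    return result[0]["summary_text"]
```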
A recent digest of such summaries included the unsubstantiated claim "Energy infrastructure under globalist attack."
Forking paths and interconnected webs
The University of Washington's and Pyrra's approaches sit toward opposite extremes in terms of automation: few teams have as many staff (around 15) just to monitor social media, or rely so heavily on algorithms to synthesize material and produce output.
All methods carry caveats. Manually monitoring and coding content can miss developments, and while artificial intelligence can process huge amounts of data, it struggles to handle nuances like satire and sarcasm.
Although incomplete, having a sense of what's circulating in the online discourse allows society to respond. Research into voting-related misinformation in 2020 has helped inform election officials and voting rights groups about what messages to emphasize this year.
For responses to be proportionate, society also needs to evaluate the impact of false narratives. Journalists have covered misinformation spreaders who seem to have very high total engagement numbers but limited impact, which risks "spreading further hysteria over the state of online operations," wrote Ben Nimmo, who now investigates global threats at Meta, Facebook's parent company.
While language can be ambiguous, it's more straightforward to track who's been following and retweeting whom. Other researchers analyze networks of actors as well as narratives.
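As an illustration of that network-centric approach (not any particular team's code), the sketch below builds a retweet graph with the networkx library and ranks accounts by how often they are retweeted. The tweet fields are hypothetical stand-ins for whatever a team's collection pipeline produces.

```python
# Illustrative sketch: model retweets as a directed graph and rank amplified accounts.
# The "user" / "retweeted_user" fields are hypothetical, not a real platform schema.
import networkx as nx

def build_retweet_graph(tweets):
    """Directed graph with an edge from each retweeter to the account being retweeted."""
    graph = nx.DiGraph()
    for tweet in tweets:
        if tweet.get("retweeted_user"):
            graph.add_edge(tweet["user"], tweet["retweeted_user"])
    return graph

sample = [
    {"user": "alice", "retweeted_user": "influencer_x"},
    {"user": "bob", "retweeted_user": "influencer_x"},
    {"user": "carol", "retweeted_user": None},  # an original post, not a retweet
]
graph = build_retweet_graph(sample)
# Accounts ranked by how often they are retweeted (in-degree)
print(sorted(graph.in_degree(), key=lambda pair: pair[1], reverse=True))
```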
The plethora of approaches is typical of a field that's just forming, says Jevin West, who studies the origins of academic disciplines at the University of Washington's Information School. Researchers come from different fields and bring the methods they're comfortable with to start, he says.
West corralled research papers from the academic database Semantic Scholar that mention 'misinformation' or 'disinformation' in their title or abstract, and found that many come from medicine, computer science and psychology, with others from geology, mathematics and art.
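A rough sketch of that kind of query appears below, using Semantic Scholar's public search API and tallying the disciplines of matching papers. The endpoint parameters and counting logic are assumptions for illustration, not a reconstruction of West's analysis.

```python
# Hypothetical sketch: search Semantic Scholar for "misinformation" papers and
# count which disciplines they come from. Not a reconstruction of West's method.
import requests
from collections import Counter

resp = requests.get(
    "https://api.semanticscholar.org/graph/v1/paper/search",
    params={"query": "misinformation", "fields": "title,fieldsOfStudy", "limit": 100},
)
papers = resp.json().get("data", [])

# Tally the disciplines attached to the matching papers
field_counts = Counter(
    field for paper in papers for field in (paper.get("fieldsOfStudy") or [])
)
print(field_counts.most_common(10))
```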
"If we're a qualitative researcher, we'll go...and literally code everything that we see." West says. More quantitative researchers do large scale analysis like mapping topics on Twitter.
Projects often use a mix of methods. "If [different methods] start converging on similar kinds of...conclusions, then I think we'll feel a little bit better about it." West says.
Grappling with basic questions
One of the very first steps of misinformation research, before someone like Agajanian starts tagging posts, is identifying relevant content on a topic. Many researchers start their search with expressions they think people talking about the topic might use, see what other phrases and hashtags appear in the results, add those to the query, and repeat the process.
It's possible to miss out on keywords and hashtags, not to mention that they change over time.
"You have to use some sort of keyword analysis. " West says, "Of course, that's very rudimentary, but you have to start somewhere."
Some teams build algorithmic tools to help. A team at Michigan State University manually sorted over 10,000 tweets into pro-vaccine, anti-vaccine, neutral and irrelevant buckets as training data. The team then used that training data to build a tool that sorted over 120 million tweets into the same buckets.
For the automatic sorting to remain relatively accurate as the social conversation evolves, humans have to keep annotating new tweets and feeding them into the training set, Pang-Ning Tan, a co-author of the project, told NPR in an email.
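For illustration only, the same general idea can be sketched with a generic scikit-learn text classifier trained on a few hand-labeled examples; this shows the technique in miniature, not the Michigan State team's actual model or data.

```python
# Toy sketch of supervised tweet classification; the labels mirror the buckets in
# the article, but the model, features and examples are illustrative assumptions.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Tiny stand-in for the hand-labeled training set
texts = [
    "Vaccines save lives, get your shot",
    "The vaccine is a government plot",
    "The clinic opens at 9am for appointments",
    "Check out our big shoe sale this weekend",
]
labels = ["pro-vaccine", "anti-vaccine", "neutral", "irrelevant"]

model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression(max_iter=1000))
model.fit(texts, labels)

# A trained model can then sort millions of unlabeled tweets into the same buckets;
# periodically labeling fresh tweets and re-fitting keeps it from drifting.
print(model.predict(["Hmmm, that's an interesting vaccine story"]))
```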
If the interplay between machine detection and human review rings familiar, that might be because you've heard large social platforms like Facebook, Twitter and TikTok describe similar processes for moderating content.
Unlike the platforms, though, researchers face another fundamental challenge: data access. Much misinformation research uses Twitter data, in part because Twitter is one of the few social media platforms that lets users easily tap into its data pipeline, known as an application programming interface, or API. This allows researchers to download and analyze large numbers of tweets and user profiles.
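As a concrete example, a researcher with API credentials can pull matching tweets in a few lines. The sketch below assumes Twitter's v2 recent-search endpoint via the tweepy library and a placeholder bearer token; access terms and limits have shifted over time.

```python
# Hypothetical sketch of pulling tweets through Twitter's v2 API with tweepy.
# The bearer token and query are placeholders; real access requires credentials.
import tweepy

client = tweepy.Client(bearer_token="YOUR_BEARER_TOKEN")

response = client.search_recent_tweets(
    query='("ballot harvesting" OR #votingmachines) -is:retweet lang:en',
    tweet_fields=["created_at", "author_id"],
    max_results=100,
)
for tweet in response.data or []:
    print(tweet.created_at, tweet.author_id, tweet.text)
```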
The data pipelines of smaller platforms tend to be less well-documented and could change on short notice.
Take the recently deplatformed Kiwi Farms as an example. The site served as a forum for anti-LGBTQ activists to harass gay and trans people. "When it first went down, we had to wait for it to basically pop back up somewhere, and then for people to talk about where that somewhere is," says Chang.
"And then we can identify, okay, the site is now here - it has this similar structure, the API is the same, it's just been replicated somewhere else. And so we're redirecting the data ingestion and pulling content from there."
Facebook's data service CrowdTangle, while purporting to serve up all publicly available posts, has been found not to do so consistently. On another occasion, Facebook bungled data sharing with researchers. Most recently, Meta is winding down CrowdTangle, with no alternative announced to take its place.
Other large platforms, like YouTube and TikTok, do not offer an accessible API, a data service or collaboration with researchers at all, though TikTok has promised more transparency for researchers.
In such a vast, fragmented and shifting landscape, West says, there's no great way at this point to say what the state of misinformation is on a given topic.
"If you were to ask Mark Zuckerberg, what are people saying on Facebook today? I don't think he could tell you." says Chang.